Do Large Scale Molecular Language Representations Capture Important Structural Information?
Predicting the chemical properties of a molecule is of great importance in
many applications, including drug discovery and material design. Machine
learning based molecular property prediction holds the promise of enabling
accurate predictions at a much lower computational cost than, for example,
Density Functional Theory (DFT) calculations. Various
representation learning methods in a supervised setting, including the features
extracted using graph neural nets, have emerged for such tasks. However, the
vast chemical space and the limited availability of labels make supervised
learning challenging, calling for learning a general-purpose molecular
representation. Recently, transformer-based language models pre-trained on
large unlabeled corpora have produced state-of-the-art results in many
downstream natural language processing tasks. Inspired by this development, we
present molecular embeddings obtained by training an efficient transformer
encoder model, MoLFormer. This model employs a linear attention mechanism
coupled with highly parallelized training on SMILES sequences of 1.1 billion
unlabeled molecules from the PubChem and ZINC datasets. Experiments show that
the learned molecular representation outperforms supervised and unsupervised
graph neural net baselines on several regression and classification tasks from
10 benchmark datasets, while performing competitively on others. Further
analyses, specifically through the lens of attention, demonstrate that
MoLFormer indeed learns a molecule's local and global structural aspects. These
results provide encouraging evidence that large-scale molecular language models
can capture sufficient structural information to be able to predict diverse
molecular properties, including quantum-chemical properties.
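The abstract names a linear attention mechanism without spelling it out. As a rough illustration only (not MoLFormer's actual implementation; the feature map below is a common stand-in in the style of kernelized linear attention, and positional-embedding details are omitted), the softmax can be replaced by a positive feature map so the cost drops from quadratic to linear in sequence length:

```python
import numpy as np

def elu_feature_map(x):
    """phi(x) = elu(x) + 1, a positive feature map (illustrative choice)."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Kernelized attention in O(n * d^2) rather than the O(n^2 * d) softmax form.

    out_i = sum_j phi(q_i).phi(k_j) v_j / sum_j phi(q_i).phi(k_j)
    """
    Qp, Kp = elu_feature_map(Q), elu_feature_map(K)   # (n, d)
    KV = Kp.T @ V                                     # (d, d_v), shared across all queries
    Z = Qp @ Kp.sum(axis=0)                           # (n,) per-query normalizers
    return (Qp @ KV) / Z[:, None]
```

Because the feature map is positive, each output row is still a convex combination of the value rows, as in softmax attention; the reordering `(phi(K).T @ V)` is what removes the quadratic pairwise score matrix.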
Wasserstein Barycenter Model Ensembling
In this paper we propose to perform model ensembling in a multiclass or a
multilabel learning setting using Wasserstein (W.) barycenters. Optimal
transport metrics, such as the Wasserstein distance, allow incorporating
semantic side information such as word embeddings. Using W. barycenters to find
the consensus between models allows us to balance confidence and semantics in
finding the agreement between the models. We show applications of Wasserstein
ensembling in attribute-based classification, multilabel learning and image
captioning. These results show that W. ensembling is a viable alternative to
basic geometric or arithmetic mean ensembling.

Comment: ICLR 201
Auditing and Generating Synthetic Data with Controllable Trust Trade-offs
Data collected from the real world tends to be biased, unbalanced, and at
risk of exposing sensitive and private information. This reality has given rise
to the idea of creating synthetic datasets to alleviate risk, bias, harm, and
privacy concerns inherent in the real data. This concept relies on Generative
AI models to produce unbiased, privacy-preserving synthetic data while being
true to the real data. In this new paradigm, how can we tell if this approach
delivers on its promises? We present an auditing framework that offers a
holistic assessment of synthetic datasets and AI models trained on them,
centered around bias and discrimination prevention, fidelity to the real data,
utility, robustness, and privacy preservation. We showcase our framework by
auditing multiple generative models on diverse use cases, including education,
healthcare, banking, human resources, and across different modalities, from
tabular, to time-series, to natural language. Our use cases demonstrate the
importance of a holistic assessment in order to ensure compliance with
socio-technical safeguards that regulators and policymakers are increasingly
enforcing. For this purpose, we introduce the trust index that ranks multiple
synthetic datasets based on their prescribed safeguards and their desired
trade-offs. Moreover, we devise a trust-index-driven model selection and
cross-validation procedure via auditing in the training loop that we showcase
on a class of transformer models that we dub TrustFormers, across different
modalities. This trust-driven model selection allows for controllable trust
trade-offs in the resulting synthetic data. We instrument our auditing
framework with workflows that connect different stakeholders from model
development to audit and certification via a synthetic data auditing report.

Comment: 49 pages; submitte
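The abstract does not define the trust index. As a minimal sketch of what ranking datasets "based on their prescribed safeguards and their desired trade-offs" could look like, the dimension names, scores, and weighting scheme below are hypothetical, not the paper's definition:

```python
# Hypothetical trust-index sketch: each synthetic dataset receives per-dimension
# audit scores in [0, 1]; stakeholders express desired trade-offs as weights.
DIMENSIONS = ("fairness", "fidelity", "utility", "robustness", "privacy")

def trust_index(scores: dict, weights: dict) -> float:
    """Weighted average of audit scores; higher means more trustworthy."""
    total = sum(weights[d] for d in DIMENSIONS)
    return sum(weights[d] * scores[d] for d in DIMENSIONS) / total

def rank_datasets(audits: dict, weights: dict) -> list:
    """Rank candidate synthetic datasets by descending trust index."""
    return sorted(audits, key=lambda name: trust_index(audits[name], weights),
                  reverse=True)

audits = {
    "synth_A": {"fairness": 0.9, "fidelity": 0.6, "utility": 0.7,
                "robustness": 0.8, "privacy": 0.9},
    "synth_B": {"fairness": 0.5, "fidelity": 0.9, "utility": 0.9,
                "robustness": 0.7, "privacy": 0.4},
}
# a privacy-first stakeholder triples the weight on the privacy dimension
privacy_first = {"fairness": 1, "fidelity": 1, "utility": 1,
                 "robustness": 1, "privacy": 3}
```

Under such a scheme, changing the weights changes the ranking, which is the controllable trade-off the abstract describes; the paper's actual index may aggregate its audit measures quite differently.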